
    Facial analysis in video: detection and recognition

    Biometric authentication systems automatically identify or verify individuals using physiological (e.g., face, fingerprint, hand geometry, retina scan) or behavioral (e.g., speaking pattern, signature, keystroke dynamics) characteristics. Among these biometrics, facial patterns have the major advantage of being the least intrusive, so automatic face recognition systems have great potential in a wide spectrum of application areas. Focusing on facial analysis, this dissertation presents a face detection method and several feature extraction methods for face recognition. Concerning face detection, a video-based frontal face detection method has been developed that uses motion analysis and color information to derive fields of interest, and distribution-based distance (DBD) and a support vector machine (SVM) for classification. When applied to 92 still images (containing 282 faces), this method achieves a 98.2% face detection rate with two false detections, a performance comparable to state-of-the-art face detection methods; when applied to video streams, it detects faces reliably and efficiently. Regarding face recognition, extensive assessments of face recognition performance in twelve color spaces have been performed, and a color feature extraction method defined by color component images across different color spaces is shown to improve the baseline performance on the Face Recognition Grand Challenge (FRGC) problems. The experimental results show that some color configurations, such as YV in the YUV color space and YJ in the YIQ color space, help improve face recognition performance. Based on these improved results, a novel feature extraction method combining genetic algorithms (GAs) and the Fisher linear discriminant (FLD) is designed to derive the optimal discriminating features that lead to an effective image representation for face recognition. This method noticeably improves the FRGC ver1.0 Experiment 4 baseline recognition rate from 37% to 73%, and significantly elevates the FRGC xxxx Experiment 4 baseline verification rate from 12% to 69%. Finally, four two-dimensional (2D) convolution filters are derived for feature extraction, and a 2D+3D face recognition system implementing both 2D and 3D imaging modalities is designed to address the FRGC problems. This method improves the FRGC ver2.0 Experiment 3 baseline performance from 54% to 72%.
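The Fisher linear discriminant at the core of the abstract's GA+FLD method projects data onto directions that maximize between-class scatter relative to within-class scatter. The sketch below is a minimal, generic FLD implementation in NumPy, not the dissertation's GA-driven variant; the function name and interface are illustrative assumptions.

```python
import numpy as np

def fisher_discriminant(X, y, n_components):
    """Generic Fisher linear discriminant sketch (not the GA+FLD method itself).

    X: (n_samples, n_features) data matrix; y: integer class labels.
    Returns a (n_features, n_components) projection matrix.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_features = X.shape[1]
    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w via pinv(Sw) @ Sb
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real
```

In the abstract's setting, a genetic algorithm would additionally search over candidate feature subsets (e.g., color component images) and use an FLD-based criterion as the fitness function; that search loop is omitted here.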

    Face detection using discriminating feature analysis and Support Vector Machine

    This paper presents a novel face detection method that applies discriminating feature analysis (DFA) and a support vector machine (SVM). The novelty of our DFA–SVM method comes from the integration of DFA, face class modeling, and SVM for face detection. First, DFA derives a discriminating feature vector by combining the input image, its 1-D Haar wavelet representation, and its amplitude projections. While the Haar wavelets produce an effective representation for object detection, the amplitude projections capture the vertical symmetric distributions and the horizontal characteristics of human face images. Second, face class modeling estimates the probability density function of the face class and defines a distribution-based measure for face and nonface classification. This distribution-based measure separates the input patterns into three classes: the face class (patterns close to the face class), the nonface class (patterns far away from the face class), and the undecided class (patterns neither close to nor far away from the face class). Finally, the SVM, together with the distribution-based measure, classifies the patterns in the undecided class into either the face class or the nonface class. Experiments using images from the MIT–CMU test sets demonstrate the feasibility of our new face detection method. In particular, when using 92 images (containing 282 faces) from the MIT–CMU test sets, our DFA–SVM method achieves a 98.2% correct face detection rate with two false detections.
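The three-way decision rule described above (face / nonface / undecided, with the SVM consulted only for the undecided band) can be sketched as a simple cascade. The threshold names and the callable interface are assumptions for illustration; the paper's actual distribution-based measure and thresholds are not reproduced here.

```python
def three_way_classify(distance, svm_is_face, tau_face, tau_nonface):
    """Illustrative three-class cascade (parameter names are assumptions).

    distance: distribution-based distance of a pattern to the face class model.
    svm_is_face: callable returning True if the SVM labels the pattern a face;
    it is invoked only for patterns in the undecided band.
    Requires tau_face < tau_nonface.
    """
    if distance <= tau_face:      # close to the face class -> accept directly
        return "face"
    if distance >= tau_nonface:   # far from the face class -> reject directly
        return "nonface"
    # Undecided band: defer to the (more expensive) SVM classifier
    return "face" if svm_is_face() else "nonface"
```

The practical appeal of such a cascade is that the cheap distribution-based measure resolves most patterns, so the SVM only runs on the ambiguous minority.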

    International Journal of Pattern Recognition and Artificial Intelligence, © World Scientific Publishing Company: Comparative Assessment of Content-Based Face Image Retrieval in Different Color Spaces

    Content-based face image retrieval is concerned with computer retrieval of face images (of a given subject) based on the geometric or statistical features automatically derived from these images. It is well known that color spaces provide powerful information for image indexing and retrieval by means of color invariants, color histograms, color texture, etc. This paper comparatively assesses the performance of content-based face image retrieval in different color spaces using a standard algorithm, Principal Component Analysis (PCA), which has become a popular algorithm in the face recognition community. In particular, we comparatively assess 12 color spaces (RGB, HSV, YUV, YCbCr, XYZ, YIQ, L*a*b*, U*V*W*, L*u*v*, I1I2I3, HSI, and rgb) by evaluating 7 color configurations for every single color space. A color configuration is defined by an individual color component image or a combination of color component images. Take the RGB color space
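The PCA-based retrieval pipeline the abstract describes can be sketched in a few lines: fit a PCA subspace on flattened images of a chosen color configuration (e.g., a single component image, or the concatenation of two component images such as Y and V), then rank the gallery by distance to the query in that subspace. This is a minimal generic sketch, not the paper's exact protocol; function names are assumptions.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on row-flattened images; return the mean and the top basis vectors."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives the principal axes without
    # explicitly forming the covariance matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def retrieve(query, gallery, mean, basis):
    """Rank gallery rows by L2 distance to the query in the PCA subspace."""
    q = (query - mean) @ basis.T
    G = (gallery - mean) @ basis.T
    dists = np.linalg.norm(G - q, axis=1)
    return np.argsort(dists)  # indices of gallery images, best match first
```

A multi-component configuration would simply concatenate the flattened component images (for instance, `np.hstack([Y.ravel(), V.ravel()])`) before calling `pca_fit`, so the same code covers all 7 configurations per color space.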